[server][dvc] change the client side transfer timeout configurable and close channel once timeout. #1805
Merged: jingy-li merged 9 commits into linkedin:main from jingy-li:fix-cert0-large-store-timeout-bug on May 21, 2025
Conversation
…d close channel once timeout.
…hen the channel becomes inactive.
gaojieliu reviewed on May 16, 2025
...nts/da-vinci-client/src/main/java/com/linkedin/davinci/blobtransfer/BlobSnapshotManager.java
...i-client/src/main/java/com/linkedin/davinci/blobtransfer/client/NettyFileTransferClient.java
gaojieliu reviewed on May 20, 2025
We should add a test to simulate the concurrent blob transfer requests to make sure the global limit work.
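The suggested test could be approximated with a stdlib-only sketch like the one below. GlobalTransferLimitSketch, its permit count, and the sleep-based fake transfer are hypothetical stand-ins for illustration, not actual Venice code: a Semaphore caps concurrent transfers globally, and the test fires more requests than permits to verify the cap holds.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class GlobalTransferLimitSketch {
  private final Semaphore permits;
  private final AtomicInteger active = new AtomicInteger();
  private final AtomicInteger peak = new AtomicInteger();

  GlobalTransferLimitSketch(int maxConcurrentTransfers) {
    this.permits = new Semaphore(maxConcurrentTransfers);
  }

  /** Serve one transfer; returns false when the global limit is saturated. */
  boolean tryServeTransfer(Runnable transfer) {
    if (!permits.tryAcquire()) {
      return false; // reject: global concurrent-transfer limit reached
    }
    try {
      int now = active.incrementAndGet();
      peak.accumulateAndGet(now, Math::max); // record the highest concurrency seen
      transfer.run();
      return true;
    } finally {
      active.decrementAndGet();
      permits.release();
    }
  }

  int peakConcurrency() {
    return peak.get();
  }

  public static void main(String[] args) throws Exception {
    GlobalTransferLimitSketch handler = new GlobalTransferLimitSketch(2);
    ExecutorService pool = Executors.newFixedThreadPool(8);
    AtomicInteger rejected = new AtomicInteger();
    // Fire 8 concurrent "transfers" against a limit of 2.
    for (int i = 0; i < 8; i++) {
      pool.submit(() -> {
        boolean served = handler.tryServeTransfer(() -> {
          try { Thread.sleep(200); } catch (InterruptedException ignored) { }
        });
        if (!served) {
          rejected.incrementAndGet();
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.SECONDS);
    System.out.println("peak=" + handler.peakConcurrency() + " rejected=" + rejected.get());
  }
}
```

A real test against P2PFileTransferServerHandler would issue actual blob-transfer requests, but the assertion is the same: observed peak concurrency never exceeds the configured limit, and excess requests are rejected rather than queued indefinitely.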
...ent/src/main/java/com/linkedin/davinci/blobtransfer/server/P2PFileTransferServerHandler.java
...ent/src/main/java/com/linkedin/davinci/blobtransfer/server/P2PFileTransferServerHandler.java
gaojieliu approved these changes on May 21, 2025
LGTM, thanks!
Problem Statement
When onboarding the blob transfer bootstrap feature to a large store (e.g., 10GB per partition, 120GB per host), the transfer time is so long that it triggers a client-side timeout exception. Upon reaching the timeout, a partition cleanup is performed before moving to the next host.
However, during the cleanup process the channels are not closed, so Netty continues receiving transferred files. If files are deleted while checksum validation is still in progress, the validation fails; these checksum failures trigger the exceptionCaught method, which eventually closes the channel.
As a result, incomplete cleanups occur—some files are deleted, but others that are still being transferred or created after the cleanup begins remain. This race condition arises because file transfers and cleanups are happening concurrently.
Ultimately, even if the blob transfer fails and the bootstrap falls back to Kafka ingestion, the incomplete cleanup leads to database corruption due to residual files.
Solution
Code changes
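The PR's actual changes live in NettyFileTransferClient and related classes; as a rough, stdlib-only illustration of the pattern described in the problem statement (a configurable client-side timeout that closes the channel before cleanup runs), consider the sketch below. TransferChannel, FakeChannel, and awaitTransfer are hypothetical names invented for this example, not Venice code.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;

public class TransferTimeoutSketch {
  /** Stand-in for a Netty channel; only open/close state matters here. */
  interface TransferChannel {
    void close();
    boolean isOpen();
  }

  static class FakeChannel implements TransferChannel {
    private final AtomicBoolean open = new AtomicBoolean(true);
    public void close() { open.set(false); }
    public boolean isOpen() { return open.get(); }
  }

  /**
   * Wait for the transfer up to a configurable timeout. On timeout,
   * close the channel FIRST so no further bytes can arrive, and only
   * then should the caller run partition cleanup. This avoids the race
   * where cleanup deletes files while the transfer is still writing.
   */
  static boolean awaitTransfer(CompletableFuture<Void> transfer,
                               TransferChannel channel,
                               long timeoutSeconds) {
    try {
      transfer.get(timeoutSeconds, TimeUnit.SECONDS);
      return true;
    } catch (TimeoutException e) {
      channel.close(); // stop in-flight writes before any cleanup
      return false;
    } catch (Exception e) {
      channel.close();
      return false;
    }
  }

  public static void main(String[] args) {
    FakeChannel channel = new FakeChannel();
    CompletableFuture<Void> neverCompletes = new CompletableFuture<>();
    boolean ok = awaitTransfer(neverCompletes, channel, 1);
    System.out.println("completed=" + ok + " channelOpen=" + channel.isOpen());
  }
}
```

With the real Netty client the close would be channel.close() on the io.netty.channel.Channel, but the ordering is the point: timeout, then close, then cleanup, so the cleanup can never race against bytes still arriving on an open channel.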
Concurrency-Specific Checks
Both reviewer and PR author to verify:
- Proper synchronization mechanisms (e.g., synchronized, RWLock) are used where needed.
- Thread-safe collections (e.g., ConcurrentHashMap, CopyOnWriteArrayList) are used where needed.
How was this PR tested?
Does this PR introduce any user-facing or breaking changes?